Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization
TPT does not explicitly align the pre-trained CLIP model to the test-sample distribution. For effective test-time adaptation of vision-language (V-L) foundation models, it is crucial to bridge the distribution gap between the pre-training dataset and the downstream evaluation set to achieve strong zero-shot generalization.
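One way to realize such a distribution-alignment objective is to match test-batch feature statistics against statistics pre-computed on the source (pre-training) data. A minimal numpy sketch, assuming source means and standard deviations are available offline; all names are illustrative, not the paper's API:

```python
import numpy as np

def distribution_alignment_loss(test_feats, mu_src, sigma_src):
    # L1 gap between test-batch feature statistics and the
    # pre-computed source (pre-training) statistics.
    mu_t = test_feats.mean(axis=0)
    sigma_t = test_feats.std(axis=0)
    return np.abs(mu_t - mu_src).mean() + np.abs(sigma_t - sigma_src).mean()

rng = np.random.default_rng(0)
mu_src, sigma_src = np.zeros(8), np.ones(8)

# A test batch drawn from the source distribution incurs a small loss...
in_dist = distribution_alignment_loss(rng.normal(0.0, 1.0, (512, 8)), mu_src, sigma_src)
# ...while a shifted (out-of-distribution) batch incurs a larger one,
# so minimizing the loss pulls the prompts toward the source statistics.
shifted = distribution_alignment_loss(rng.normal(3.0, 1.0, (512, 8)), mu_src, sigma_src)
```

In a full method this scalar would be minimized over prompt parameters alongside the usual test-time entropy objective.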
Contrastive Time Series Forecasting with Anomalies
Ekstrand, Joel, Taghiyarrenani, Zahra, Nowaczyk, Slawomir
Time-series forecasting predicts future values from past data. In real-world settings, some anomalous events have lasting effects and influence the forecast, while others are short-lived and should be ignored. Standard forecasting models fail to make this distinction, often either overreacting to noise or missing persistent shifts. We propose Co-TSFA (Contrastive Time-Series Forecasting with Anomalies), a regularization framework that learns when to ignore anomalies and when to respond. Co-TSFA generates input-only and input-output augmentations to model forecast-irrelevant and forecast-relevant anomalies, and introduces a latent-output alignment loss that ties representation changes to forecast changes. This encourages invariance to irrelevant perturbations while preserving sensitivity to meaningful distributional shifts. Experiments on the Traffic and Electricity benchmarks, as well as on a real-world cash-demand dataset, demonstrate that Co-TSFA improves performance under anomalous conditions while maintaining accuracy on normal data. An anonymized GitHub repository with the implementation of Co-TSFA is provided and will be made public upon acceptance.
Figure: Sequence 1 shows an input-only anomaly that should not affect the forecast, whereas Sequence 2 shows an input anomaly that persists into the output (forecast-relevant).
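The latent-output alignment idea can be sketched as a loss that penalizes mismatch between how much the representation moves and how much the forecast moves under an augmentation. A toy numpy version, with all function and variable names hypothetical rather than taken from the Co-TSFA implementation:

```python
import numpy as np

def latent_output_alignment_loss(z_orig, z_aug, y_orig, y_aug):
    # Tie the change in representation (dz) to the change in forecast (dy):
    # the loss is zero only when the two move by the same amount.
    dz = np.linalg.norm(z_orig - z_aug)
    dy = np.linalg.norm(y_orig - y_aug)
    return (dz - dy) ** 2

rng = np.random.default_rng(1)
z, y = rng.normal(size=16), rng.normal(size=8)

# Input-only augmentation: the target forecast is unchanged, so any
# representation drift is penalized (invariance to irrelevant anomalies).
irrelevant = latent_output_alignment_loss(z, z + 0.5, y, y)
# Matched case: representation and forecast unchanged together -> zero loss.
matched = latent_output_alignment_loss(z, z, y, y)
```

The same term leaves forecast-relevant augmentations unpenalized as long as the latent shift is proportional to the induced forecast shift, which is the sensitivity/invariance trade-off the abstract describes.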
FedTopo: Topology-Informed Representation Alignment in Federated Learning under Non-I.I.D. Conditions
Hu, Ke, Xiang, Liyao, Tang, Peng, Qiu, Weidong
Current federated-learning models deteriorate under heterogeneous (non-I.I.D.) client data, as their feature representations diverge and pixel- or patch-level objectives fail to capture the global topology that is essential for high-dimensional visual tasks. We propose FedTopo, a framework that integrates Topology-Guided Block Screening (TGBS) and Topological Embedding (TE) to leverage topological information, yielding coherently aligned cross-client representations via a Topological Alignment Loss (TAL). First, TGBS automatically selects the most topology-informative block, i.e., the one with maximal topological separability, whose persistence-based signatures best distinguish within- versus between-class pairs, ensuring that subsequent analysis focuses on topology-rich features. Next, this block yields a compact Topological Embedding, which quantifies the topological information of each client. Finally, the TAL guides clients to maintain topological consistency with the global model during optimization, reducing representation drift across rounds. Experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 under four non-I.I.D. partitions show that FedTopo accelerates convergence and improves accuracy over strong baselines. Code is available in the Supplementary Materials.
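The alignment step above can be illustrated with a toy numpy sketch. The persistence-based signature is replaced here by a crude proxy (the k smallest pairwise feature distances, sorted), and every name is illustrative; the actual FedTopo embedding uses persistent homology:

```python
import numpy as np

def topo_embedding(feats, k=8):
    # Crude stand-in for a persistence-based signature: the k smallest
    # pairwise feature distances, sorted. (FedTopo itself derives its
    # embedding from persistent-homology signatures.)
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    upper = d[np.triu_indices_from(d, k=1)]
    return np.sort(upper)[:k]

def topological_alignment_loss(client_feats, global_emb, k=8):
    # L2 gap between a client's topological embedding and the global one.
    return float(np.sum((topo_embedding(client_feats, k) - global_emb) ** 2))

rng = np.random.default_rng(2)
global_feats = rng.normal(size=(32, 4))
g_emb = topo_embedding(global_feats)
# A client whose features match the global model's aligns perfectly;
# a drifted client would incur a positive penalty.
loss_same = topological_alignment_loss(global_feats, g_emb)
```

During local training, each client would add this penalty to its task loss so that its representation topology stays consistent with the global model across rounds.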